97 research outputs found

    Watermarking for multimedia security using complex wavelets

    Get PDF
    This paper investigates the application of complex wavelet transforms to digital data hiding. Complex wavelets offer improved directional selectivity and shift invariance over their discretely sampled counterparts, allowing watermark distortions to be better adapted to the host media. Two methods of deriving visual models for the watermarking system are adapted to the complex wavelet transforms and their performances are compared. To improve capacity, a spread transform embedding algorithm is devised; it combines the robustness of spread spectrum methods with the high capacity of quantization-based methods. Using established information-theoretic methods, limits on watermark capacity are derived that demonstrate the superiority of complex wavelets over discretely sampled wavelets. Finally, results for the algorithm under commonly used attacks demonstrate its robustness and the improved performance offered by complex wavelet transforms.
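
    The spread transform idea above can be illustrated with spread-transform dither modulation: the host block is projected onto a spreading vector (the spread-spectrum part) and the projection is quantized to a lattice coset selected by the message bit (the quantization part). The sketch below is a minimal NumPy illustration of that idea; it operates on a generic coefficient vector rather than the paper's complex wavelet coefficients, and the step size delta stands in for the perceptually weighted step a visual model would supply.

        # Minimal sketch of spread-transform dither modulation (ST-DM); a generic
        # vector stands in for a block of wavelet coefficients.
        import numpy as np

        def st_dm_embed(block, bit, u, delta):
            """Embed one bit into a coefficient block along spreading direction u."""
            u = u / np.linalg.norm(u)
            p = block @ u                              # spread-spectrum projection
            dither = 0.0 if bit == 0 else delta / 2.0  # bit selects the quantizer coset
            q = delta * np.round((p - dither) / delta) + dither
            return block + (q - p) * u                 # move the projection onto the lattice

        def st_dm_decode(block, u, delta):
            """Recover the bit by finding the nearest quantizer coset."""
            u = u / np.linalg.norm(u)
            p = block @ u
            d0 = np.abs(p - delta * np.round(p / delta))
            d1 = np.abs(p - (delta * np.round((p - delta / 2) / delta) + delta / 2))
            return 0 if d0 <= d1 else 1

        rng = np.random.default_rng(0)
        host = rng.normal(size=64)                     # stand-in for a sub-band coefficient block
        u = rng.normal(size=64)
        marked = st_dm_embed(host, 1, u, delta=2.0)
        assert st_dm_decode(marked, u, delta=2.0) == 1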

    Algorithmic processing to aid in leukemia detection

    Get PDF
    Background: We introduce the medical context and the basic concepts needed to understand our results, then explain mathematical morphology and conclude with a description of the algorithmic processing proposed in this paper. Cancers, including leukemia and lymphoma, can cause uncontrolled growth of an abnormal type of blood cell in the bone marrow, resulting in a greatly increased risk of infection and/or serious bleeding. Methods: We present the detailed steps of our proposed system, which produces a final result showing the detection of abnormal cells. It typically starts with a median-filter pre-processing step and then applies different morphological operators, which allow us to segment the original image and detect cancerous cells. The basic idea behind all operators in mathematical morphology is to compare the set of objects to be analyzed with another object of known shape, called a structuring element. The structuring element is a simple geometric figure, known or arbitrary, such as a circle, segment, square, or triangle. Results: We show the results obtained after tests carried out in MATLAB. To improve the visualization of the abnormal blood cells, we applied the basic morphological operations in different ways, performing an opening by reconstruction followed by a closing by reconstruction. The results show efficient detection of the targeted objects (abnormal blood cells, i.e., leukemia). Conclusion: In this paper, we used operators of mathematical morphology to detect abnormal cells for diagnostic aid and for the transmission of accurate and precise clinical information, helping specialists (hematologists) to distinguish abnormal or cancerous cells and to follow the evolution of leukemia. The algorithmic processing presented in this article performed the task of detecting cancerous cells successfully and produced satisfactory results. As future work, we envisage a diagnostic-aid system integrating this processing on reconfigurable microelectronic technologies, with the goal of quantifying the cancerous region.
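
    For readers who want to reproduce the pipeline described above, the sketch below shows median-filter pre-processing followed by opening and closing by reconstruction, using scikit-image and SciPy rather than the authors' MATLAB code. The input file name, filter size, structuring-element radius, and the final Otsu threshold are illustrative assumptions, not values from the paper.

        # Minimal sketch: median filtering, then opening/closing by reconstruction.
        from scipy.ndimage import median_filter
        from skimage import io, color, filters
        from skimage.morphology import disk, erosion, dilation, reconstruction

        blood = color.rgb2gray(io.imread("blood_smear.png"))   # hypothetical input image
        smoothed = median_filter(blood, size=3)                 # median-filter pre-processing

        se = disk(5)                                             # structuring element
        # Opening by reconstruction: erode, then rebuild the surviving objects exactly.
        opened_rec = reconstruction(erosion(smoothed, se), smoothed, method="dilation")
        # Closing by reconstruction: dilate, then rebuild from above to fill dark gaps.
        closed_rec = reconstruction(dilation(opened_rec, se), opened_rec, method="erosion")

        # Candidate abnormal-cell regions: simple global threshold on the reconstructed image.
        mask = closed_rec > filters.threshold_otsu(closed_rec)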

    Multi texture analysis of colorectal cancer continuum using multispectral imagery

    Get PDF
    Purpose: This paper proposes to characterize the continuum of colorectal cancer (CRC) using multiple texture features extracted from multispectral optical microscopy images. Three types of pathological tissues (PT) are considered: benign hyperplasia, intraepithelial neoplasia and carcinoma. Materials and Methods: In the proposed approach, the region of interest containing PT is first extracted from multispectral images using active contour segmentation. This region is then encoded using texture features based on the Laplacian-of-Gaussian (LoG) filter, discrete wavelets (DW) and gray level co-occurrence matrices (GLCM). To assess the significance of textural differences between PT types, a statistical analysis based on the Kruskal-Wallis test is performed. The usefulness of texture features is then evaluated quantitatively in terms of their ability to predict PT types using various classifier models. Results: Preliminary results show significant texture differences between PT types, for all texture features (p-value < 0.01). Individually, GLCM texture features outperform LoG and DW features in terms of PT type prediction. However, a higher performance can be achieved by combining all texture features, resulting in a mean classification accuracy of 98.92%, sensitivity of 98.12%, and specificity of 99.67%. Conclusions: These results demonstrate the efficiency and effectiveness of combining multiple texture features for characterizing the continuum of CRC and discriminating between pathological tissues in multispectral images.
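
    As an illustration of the GLCM part of the texture pipeline, the sketch below computes a small co-occurrence feature vector for an already-segmented region of interest and applies a Kruskal-Wallis test across the three tissue groups. The distances, angles, quantisation level, and the chosen feature properties are illustrative assumptions rather than the paper's exact settings.

        # Minimal sketch of GLCM texture features plus a Kruskal-Wallis significance test.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops   # 'greyco*' in older scikit-image
        from scipy.stats import kruskal

        def glcm_features(roi_u8):
            """Contrast/homogeneity/energy/correlation averaged over four directions (uint8 ROI)."""
            glcm = graycomatrix(roi_u8, distances=[1],
                                angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                                levels=256, symmetric=True, normed=True)
            props = ["contrast", "homogeneity", "energy", "correlation"]
            return np.array([graycoprops(glcm, p).mean() for p in props])

        # Kruskal-Wallis test of one feature across the three pathological-tissue groups
        # (benign hyperplasia, intraepithelial neoplasia, carcinoma), given per-ROI feature rows.
        def texture_pvalue(feat_bh, feat_in, feat_ca, idx=0):
            return kruskal(feat_bh[:, idx], feat_in[:, idx], feat_ca[:, idx]).pvalue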

    Vers un partitionnement heuristique pour le mapping de data-path sur FPGA reconfiguré dynamiquement

    Get PDF
    The problem addressed is the temporal partitioning of signal and image processing (TSI) algorithms subject to a real-time constraint, with a view to their optimized implementation on dynamically reconfigurable architectures. We present a method, which can be applied heuristically, for determining the steps of dynamic reconfiguration (DR) and a partitioning suited to the implementation of a data-path. The evaluation of certain technology-related characteristics, together with the proposed heuristically applied method, allows a partitioning that optimizes hardware resources under dynamic reconfiguration.
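
    As a generic illustration of temporal partitioning (not the heuristic proposed in the paper), the sketch below greedily packs a data-path, given as operations in dependency order with area costs, into successive reconfiguration stages under an area budget; the operation names and budget are invented for the example.

        # Minimal sketch of greedy temporal partitioning for dynamic reconfiguration.
        def partition(ops, area_budget):
            """ops: list of (name, area) in dependency order -> list of configurations."""
            stages, current, used = [], [], 0
            for name, area in ops:
                if used + area > area_budget and current:
                    stages.append(current)             # budget exceeded: start a new configuration
                    current, used = [], 0
                current.append(name)
                used += area
            if current:
                stages.append(current)
            return stages

        datapath = [("fir", 40), ("fft", 70), ("quant", 20), ("huffman", 35)]
        print(partition(datapath, area_budget=100))    # [['fir'], ['fft', 'quant'], ['huffman']]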

    Formal Verification of Fault Tolerant NoC-based Architecture

    Get PDF
    Designing fault-tolerant Network-on-Chip (NoC) architectures for System-on-Chip (SoC)-based reconfigurable Field-Programmable Gate Array (FPGA) technology poses challenges for the conceptualisation of Multiprocessor System-on-Chip (MPSoC) designs. For this purpose, the use of rigorous formal approaches, based on incremental design and proof theory, has become an essential step in validating an architecture. The Event-B formal method is a promising approach that can be used to accurately develop, model and prove properties in the domain of SoCs and MPSoCs. This paper gives a formal verification of a NoC architecture using the Event-B methodology. The formalisation process is based on an incremental and validated correct-by-construction development of the NoC architecture.
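
    The Event-B development itself is not reproduced here, but the kind of property one would prove can be illustrated executably. The sketch below checks, for a small mesh NoC with deterministic XY routing, that every packet reaches its destination in exactly |dx| + |dy| hops; the mesh size, routing function, and property are illustrative assumptions, not the paper's Event-B model.

        # Minimal executable check of a routing property on a small mesh NoC.
        def xy_route(src, dst):
            """Return the list of hops an XY-routed packet takes from src to dst."""
            (x, y), hops = src, []
            while x != dst[0]:                 # route along X first
                x += 1 if dst[0] > x else -1
                hops.append((x, y))
            while y != dst[1]:                 # then along Y
                y += 1 if dst[1] > y else -1
                hops.append((x, y))
            return hops

        N = 4                                  # exhaustive check over a 4x4 mesh
        for sx in range(N):
            for sy in range(N):
                for dx in range(N):
                    for dy in range(N):
                        hops = xy_route((sx, sy), (dx, dy))
                        assert hops == [] or hops[-1] == (dx, dy)
                        assert len(hops) == abs(dx - sx) + abs(dy - sy)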

    Digital Implementation of an Improved LTE Stream Cipher Snow-3G Based on Hyperchaotic PRNG

    Get PDF
    SNOW-3G is a stream cipher used by the 3GPP standards as the core of the confidentiality and integrity algorithms for UMTS and LTE networks. This paper proposes an enhancement of the regular SNOW-3G ciphering algorithm based on a hyperchaotic PRNG (HC-PRNG). The proposed cipher scheme uses a hyperchaotic generator as an additional layer on top of the SNOW-3G architecture to improve the randomness of its output keystream. The objective of this work is to strengthen the security of the regular SNOW-3G algorithm while maintaining its standardized properties. The originality of this new scheme is that it provides a good trade-off between randomness properties, performance, and hardware resources. Numerical simulations, a digital hardware implementation, and experimental results on Xilinx Virtex FPGA technology demonstrate the feasibility and efficiency of our solution, a promising technique that can be applied to secure new-generation mobile standards. A thorough statistical analysis demonstrates the improved randomness properties of the new scheme compared to the standard SNOW-3G, while preserving its resistance against cryptanalytic attacks.
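
    The layering idea can be sketched as follows: bytes drawn from a chaotic generator are XORed with the cipher's keystream as an additional whitening stage. In this illustration a plain logistic map stands in for the paper's hyperchaotic generator, and base_keystream is a placeholder; the standardized SNOW-3G keystream itself is not reproduced here.

        # Minimal sketch of the whitening layer only; not the paper's HC-PRNG or SNOW-3G.
        def chaotic_bytes(n, x=0.391, r=3.99):
            out = []
            for _ in range(100 + n):                  # discard a transient first
                x = r * x * (1.0 - x)                 # logistic-map iteration, stays in (0, 1)
                out.append(int(x * 10**6) % 256)      # quantize the state to one byte
            return bytes(out[100:])

        def whiten(base_keystream: bytes) -> bytes:
            chaos = chaotic_bytes(len(base_keystream))
            return bytes(k ^ c for k, c in zip(base_keystream, chaos))

        base_keystream = bytes(range(16))             # placeholder for SNOW-3G output
        print(whiten(base_keystream).hex())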

    Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine

    Full text link
    Artificial intelligence (AI) continues to transform data analysis in many domains. Progress in each domain is driven by a growing body of annotated data, increased computational resources, and technological innovations. In medicine, the sensitivity of the data, the complexity of the tasks, the potentially high stakes, and the requirement of accountability give rise to a particular set of challenges. In this review, we focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making. (1) Explainable AI aims to produce a human-interpretable justification for each output. Such models increase confidence if the results appear plausible and match the clinicians' expectations. However, the absence of a plausible explanation does not imply an inaccurate model. Especially in highly non-linear, complex models that are tuned to maximize accuracy, such interpretable representations only reflect a small portion of the justification. (2) Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains, for example, a classification task based on images acquired on different acquisition hardware. (3) Federated learning enables learning large-scale models without exposing sensitive personal health information. Unlike centralized AI learning, where the centralized learning machine has access to the entire training data, the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates, not personal health data. This narrative review covers the basic concepts, highlights relevant cornerstone and state-of-the-art research in the field, and discusses perspectives. Comment: This paper is accepted in IEEE CAA Journal of Automatica Sinica, Nov. 10 202
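
    The federated learning process described above can be sketched with a minimal federated-averaging loop: a global model is broadcast, each site computes a local update on its own data, and only the parameter vectors are returned and averaged. The least-squares local update, synthetic site data, and round count below are illustrative assumptions, not any specific system from the review.

        # Minimal sketch of federated averaging: only parameters leave each site.
        import numpy as np

        def local_update(w, X, y, lr=0.01):
            grad = X.T @ (X @ w - y) / len(y)          # gradient of the local squared error
            return w - lr * grad                       # one local training step

        def federated_round(w_global, site_data, weights=None):
            """One round: broadcast w_global, collect site updates, return their weighted average."""
            updates = [local_update(w_global, X, y) for X, y in site_data]
            return np.average(updates, axis=0, weights=weights)

        rng = np.random.default_rng(0)
        sites = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(3)]
        w = np.zeros(5)
        for _ in range(100):                           # federated training loop
            w = federated_round(w, sites, weights=[len(y) for _, y in sites])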

    Hyperchaotic synchronization and encryption for secure wireless data communication

    No full text
